APT 2.2.0 marks the freeze of the 2.1 development series and the start of the 2.2 stable series.
Let's have a look at what changed compared to 2.0. Many of you who run Debian testing or unstable,
or Ubuntu groovy or hirsute, will already have seen most of these changes.
New features
Various patterns related to dependencies, such as ?depends, are now available (2.1.16). For example, apt list '?depends(?exact-name(libc6))' lists the packages that depend on libc6.
The Protected field is now supported.
It replaces the previous Important field and is like Essential,
but only for installed packages
(with some minor differences beyond that, for example in the ordering of installs).
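For illustration, a package opting into this would carry the new field in its control stanza (the package name here is made up):

```
Package: example-boot-manager
Protected: yes
```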
The update command has gained an --error-on=any option that makes it error out on any failure,
not just the ones it considers persistent.
The rred method can now be used as a standalone program to merge pdiff files.
APT now implements phased updates.
Phasing is used in Ubuntu to slow down and control the rollout of updates in the -updates pocket,
but has previously only been available to desktop users via update-manager.
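APT also gained configuration options to opt in or out of phasing on a given machine; assuming the option name from apt.conf (check the documentation for your version), a sketch of a per-machine opt-in looks like:

```
// /etc/apt/apt.conf.d/99-always-phased (hypothetical file name)
// Install phased updates immediately, ignoring the phasing percentage.
APT::Get::Always-Include-Phased-Updates "true";
```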
Other behavioral changes
The kernel autoremoval helper code has been rewritten from shell to C++ and now runs at run-time,
rather than at kernel install time,
in order to correctly protect the kernel that is running now,
rather than the kernel that was running when the newest one was installed.
It also now protects only up to 3 kernels, instead of up to 4, as was originally intended
and was the case before the 1.1 series.
This prevents /boot partitions from running out of space, especially on Ubuntu, which sizes boot
partitions for the original spec.
Performance improvements
The cache is now hashed using XXH3 instead of Adler32 (or CRC32c on SSE4.2 platforms)
The hash table size has been increased
Bug fixes
Wildcards work normally again (since 2.1.0)
The cache file now includes all translation files in /var/lib/apt/lists,
so multi-user systems with different locales now correctly show translated descriptions.
URLs are no longer dequoted on redirects only to be requoted again,
fixing some redirects where servers did not expect different quoting.
Immediate configuration is now best-effort, and failure is no longer fatal.
Various changes to solver marking, leading to different or better results in some cases (since 2.1.0)
The lower-level I/O bits of the HTTP method have been rewritten, hopefully improving stability
The HTTP method no longer infinitely retries downloads on some connection errors
The pkgnames command no longer accidentally includes source packages
Various fixes from fuzzing efforts by David
Security fixes
Out-of-bound reads in ar and tar implementations (CVE-2020-3810, 2.1.2)
Integer overflows in ar and tar (CVE-2020-27350, 2.1.13)
(all of which have been backported to all stable series,
back all the way to the 1.0.9.8.* series in jessie ELTS)
Incompatibilities
N/A - there were no breaking changes in apt 2.2 that we are aware of.
Deprecations
apt-key(1) is scheduled to be removed in Q2/2022, and several new warnings have been added.
apt-key was made obsolete in version 0.7.25.1, released in January 2010, when
/etc/apt/trusted.gpg.d became a supported place to drop additional keyring files,
and has since then only been intended for deleting keys in the legacy trusted.gpg keyring.
Please manage files in trusted.gpg.d yourself;
or place them in a different location such as
/etc/apt/keyrings (or make up your own, there's no standard location)
or /usr/share/keyrings,
and use signed-by in the sources.list.d files.
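For example, a sources.list.d entry pinned to a dedicated keyring might look like this (the repository URL and keyring path are made up for illustration):

```
# /etc/apt/sources.list.d/example.list
deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://deb.example.org/apt stable main
```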
The legacy trusted.gpg keyring still works, but will also stop working eventually.
Please make sure you have all your keys in trusted.gpg.d.
Warnings might be added in the upcoming months when a signature cannot be verified using just trusted.gpg.d.
Future versions of APT might switch away from GPG.
As a reminder, regular expressions and wildcards other than * inside package names are deprecated (since 2.0).
They are not available anymore in apt(8), and will be removed for safety reasons in apt-get in a later release.
Yesterday I got a fresh new Pixel 4a to replace my dying OnePlus 6.
The OnePlus had developed some faults over time: it repeatedly loses connection to the AP and the network, and it has collected a bunch of scratches and scuffs from falling on various surfaces without any protection over the past year.
Why get a Pixel?
Camera: OnePlus focuses on stuffing as many sensors as it can into a phone, rather than on a good main sensor, resulting in pictures that are mediocre, blurry messes - the dreaded oil painting effect.
Pixels have some of the best cameras in the smartphone world. Sure, other hardware is far more capable, but the Pixels manage consistent results, so you need to take fewer pictures because they don't come out blurry half the time, and the post-processing is so good that the pictures you get are just great. Other phones can shoot better pictures, sure - on a tripod.
Security updates: Pixels provide 3 years of monthly updates, with security updates being published on the 5th of each month. OnePlus only provides updates every 2 months, and the updates they do release are almost a month out of date, not counting that they are only 1st-of-month patches, meaning vendor blob updates included in the 5th-of-month updates are even a month older. Given that all my banking runs on the phone, I don't want it to be constantly behind.
Feature updates: Of course, Pixels also get Beta Android releases and the newest Android release faster than any other phone, which is advantageous for Android development and being nerdy.
Size and weight: OnePlus phones keep getting bigger and bigger. By today's standards, the OnePlus 6 at 6.18" and 177g is a small and lightweight device. Their latest phone, the Nord, measures 6.44" and weighs 184g; the OnePlus 8 comes in at 180g with a 6.55" display. This is becoming unwieldy. Eschewing glass and aluminium for plastic, the Pixel 4a comes in at 144g.
First impressions
Accessories
The Pixel 4a comes in a small box with a charger, a USB-C to USB-C cable, a USB-OTG adapter, and a SIM tray ejector. No pre-installed screen protector or bumper is provided, as we've grown accustomed to from Chinese manufacturers like OnePlus or Xiaomi. The SIM tray ejector has a circular end instead of the standard oval one - I assume so it looks like the o in Google?
Google sells you fabric cases for 45 EUR. That seems a bit excessive, although I like that a lot of it is recycled.
Haptics
Coming from a 6.18" phablet, the Pixel 4a with its 5.81" display feels tiny. In fact, it's so tiny my thumb and my index finger can touch while holding it. Cute! Bezels are a bit bigger, resulting in a slightly lower screen-to-body ratio. The bottom chin is probably impractically small; this was already a problem on the OnePlus 6, but this one is even smaller. Oh well, form over function.
The buttons on the side are very loud and clicky, as is the vibration motor. I wonder if this Pixel thinks it's a Model M. It just feels great.
The plastic back feels really good; it's that sort of high-quality smooth plastic you used to see on those high-end Nokia devices.
The fingerprint reader is super fast. Setup takes just a few seconds per finger, and it works reliably. Other phones (OnePlus 6, Mi A1/A2) take like half a minute or a minute to set up.
Software
The software - stock Android 11 - is fairly similar to OnePlus' OxygenOS. It's a clean experience, without a ton of added bloatware (even OnePlus now ships Facebook out of the box, eww). It's cleaner than OxygenOS in some ways - there are no duplicate photos apps, for example. On the other hand, it also has quite a bunch of Google stuff I could not care less about, like YT Music. To be fair, those are minor noise once all 130 apps were transferred from the old phone.
There are various things I miss coming from OnePlus, such as off-screen gestures, a network transfer rate indicator in quick settings, or a circular battery icon. But the Pixel has an always-on display, which is kind of nice. Most of the cool Pixel features, like call screening or live transcription, are unfortunately not available in Germany.
The display is set to show the same amount of content as my 6.18" OnePlus 6 did, so everything is a bit tinier. This usually takes me a week or two to adjust to, and then when I look at the OnePlus again I'll be like "oh, the font is huge", but right now it feels a bit small on the Pixel.
You can configure three colour profiles for the Pixel 4a: Natural, Boosted, and Adaptive. I have mine set to Adaptive. I'd love to see stock Android learn what OnePlus has here: the ability to adjust the colour temperature manually, as I prefer to keep my devices closer to 5500K than 6500K, which I feel is a bit easier on the eyes. Or well, just give me the ability to load an ICM profile (though I'd need to calibrate the screen then - work!).
Migration experience
Restoring the apps from my old phone only restored settings for a handful out of 130, which is disappointing. I had to spend an hour or two logging in to all the other apps, and I had to fiddle far too long with openScale to get it to take its data over. It's a mystery to me why people do not allow their apps to be backed up, especially something as innocent as a weight tracking app. One of my banking apps restored its logins, which I did not really like. KeePass2Android settings were restored as well, but at least the key file was not restored.
I did not opt in to restoring my device settings, as I feel that restoring device settings when changing manufacturers is bound to mess up some things. For example, I remember people migrating to OnePlus phones and getting their old DND schedule without any way to change it, because OnePlus had hidden the DND settings. I assume that's the reason some accounts, like my work GSuite account, were not migrated (it said it would migrate accounts during setup).
I've set up Bitwarden as my auto-fill service, so I could log in to most of my apps and websites using the stored credentials. I found that this often did not work. Chrome, for example, autofills fine once, but if I then want to autofill again, I have to kill and restart it; otherwise I don't get the auto-fill menu. Other apps did not allow any auto-fill at all and only gave me the option to copy and paste. Yikes - auto-fill on Android still needs a lot of work.
Performance
It hangs a bit sometimes, but this was likely due to me having set 2 million iterations on my Bitwarden KDF, using Bitwarden a lot, and then opening up all 130 apps to log into them, which overwhelmed the phone a bit. Apart from that, it does not feel worse than the OnePlus 6, which was to be expected, given that the benchmarks only show a slight loss in performance.
Photos do take a few seconds to process after taking them, which is annoying, but understandable given how much Google relies on computation to provide decent pictures.
Audio
The Pixel has dual speakers, with the earpiece delivering a tiny sound and the bottom-firing speaker doing most of the work. Still, it's better than just having the bottom-firing speaker, as it provides a more immersive experience. Bass makes this thing vibrate a lot. It does not feel like a resonance sort of thing, but you can feel the bass in your hands. I've never had this before, and it will take some getting used to.
Final thoughts
This is a boring phone. There's no wow factor at all. It's neither huge, nor does it have high-res 48 or 64 MP cameras, nor a ton of sensors. But everything it does, it does well. It does not pretend to be a flagship like its competition; it doesn't want to wow you, it just wants to be the perfect phone for you. The build is solid, the buttons make you think of a Model M, the camera is one of the best in any smartphone, and you of course get the latest updates before anyone else. It does not feel like a 350 EUR phone, yet it is. 128 GB of storage is plenty, 1080p resolution is plenty, 12.2 MP is, you guessed it, plenty.
The same applies to the other two Pixel phones, the 4a 5G and the 5. Neither is a particularly exciting phone, and I personally find it hard to justify spending 620 EUR on the Pixel 5 when the Pixel 4a does the job for me, but the 4a 5G might appeal to users looking for larger phones. As to 5G, I wouldn't get much use out of it, seeing as it's not available anywhere I am. Because I'm on Vodafone. If you have a Telekom contract or live outside of Germany, you might just have good 5G coverage already, and it might make sense to get a 5G phone rather than sticking to the budget choice.
Outlook
The big question for me is whether I'll be able to adjust to the smaller display. I now have a tablet, so I'm using the phone less often (which my hands thank me for), which means that a smaller phone is probably a good call.
Oh, while we're talking about calls: I only have a data-only SIM in it, so I could not test calling. I'm transferring to a new phone contract this month, and I'll give it a go then. This will be the first time I get VoLTE and WiFi calling; although it is Vodafone, so quality might just be worse than Telekom on 2G, who knows. A big shoutout to congstar for letting me cancel with a simple button click, and to @vodafoneservice on Twitter for quickly setting up my benefits of an additional 5 GB per month and a 10 EUR discount for being an existing cable customer.
I'm also looking forward to playing around with the camera (especially Night Sight) and eSIM. And I'm getting a case from China, which was handed over to the airline on Sep 17 according to AliExpress, so I guess it should arrive in the next weeks. Oh, and the screen protector is not here yet, so I can't really judge the screen quality much, as I still have the factory protection film on it, and that's just a blurry mess - but good enough for setting it up. Please, Google, pre-apply a screen protector on future phones and include a simple bumper case.
I might report back in two weeks when I have spent some more time with the device.
So a few months ago kiddo one dropped an apparently fairly large cup of coffee onto her one and only trusted computer. With a few months (then) to graduation (which by now happened), and with the apparent genius bar verdict of "it's a goner", a new one was ordered. As it turns out, the supposedly dead one coped well enough with the coffee that after a few weeks of drying it booted again. But given the newer one, its apparent age and whatnot, it was deemed surplus. So I poked around a little on the interwebs and concluded that yes, this could work.
Fast forward a few months: I finally got hold of it and had some time to play with it. First, a bootable USB stick was prepared, and since the machine's content was really (really, and check again: really) no longer needed, I got hold of it for good.
tl;dr: It works just fine. It is a little heavier than I thought (and isn't air supposed to be weightless?). The ergonomics seem quite nice. The keyboard is decent. Screen resolution on this pre-retina simple Air is so-so at 1440 pixels. But battery life seems ok, and e.g. the camera is way better than what I have in my trusted Lenovo X1 or at my desktop. So just as a zoom client it may make a lot of sense; otherwise just walking around with it as a quick portable machine seems perfect (especially as my Lenovo X1 still (ahem) suffers from one broken key I really need to fix).
Below are some lightly edited notes from the installation. Initial steps were quick: maybe an hour or less? Customizing a machine takes longer than I remembered, this took a few minutes here and there quite a few times, but always incremental.
Initial Steps
Download of the Ubuntu 20.04 LTS image: took a few moments, even on broadband; feels slower than normal (fast!) Ubuntu package updates, maybe a lesser CDN or bad luck
Plug in the USB stick, recycle power, press Option on the macOS keyboard: voila
After a quick hunch: no to live/test only, and yes to install, whole disk
Install easy, very few questions, somehow skips wifi
So activate wifi manually, and everything pretty much works
Customization
First, deal with the fn and ctrl key swap. Installed git and followed this github repo, which worked just fine. Yay. First (manual) Linux kernel module build needed in half a decade? Longer?
Fire up firefox, go to "download chrome", install chrome. Sign in. Turn on syncing. Sign into Pushbullet and Momentum.
syncthing, which is excellent. Initially via apt, later from their PPA. Spent some time remembering how to set up the mutual handshakes between devices. Now syncing desktop/server, lenovo x1 laptop, android phone and this new laptop
keepassx via apt and set up using Sync/ folder. Now all (encrypted) passwords synced.
Discovered synergy is no longer really free, so after a quick search found and installed barrier (via apt) to have one keyboard/mouse from the desktop reach the laptop.
Added emacs via apt; so far "empty", no config files yet
Added ssh via apt, need to propagate keys to github and gitlab
Added R via add-apt-repository --yes "ppa:marutter/rrutter4.0" and add-apt-repository --yes "ppa:c2d4u.team/c2d4u4.0+". Added littler and then RStudio
Added wajig (apt frontend) and byobu, both via apt
Created ssh key, shipped it to server and github + gitlab
Cloned (not-public) dotfiles repo and linked some dotfiles in
People love to tell you that they "don't watch sports" but the story of Lance Armstrong provides a fascinating lens through which to observe our culture at large.
For example, even granting all that he did and all the context in which he did it, why do sports cheats act like a lightning rod for such an instinctive hatred? After all, the sheer level of distaste directed at people such as Lance eludes countless other criminals in our society, many of whom have taken a lot more with far fewer scruples. The question is not one of logic or rationality, but of proportionality.
In some ways it should be unsurprising. In all areas of life, we instinctively prefer binary judgements to moral ambiguities, and the sports cheat is a cliché of moral bankruptcy: cheating at something so seemingly trivial as a sport actually makes it more, not less, offensive to us. But we then find ourselves strangely enthralled by them, drawn together in admiration of their outlaw-like tenacity, placing them strangely close to criminal folk heroes. Clearly, sport is not as unimportant as we like to claim it is. In Lance's case in particular, though, there is undeniably a Shakespearean quality to the story, and we are forced to let go of our strict ideas of right and wrong and appreciate all the nuance.
There is a lot of this nuance in Marina Zenovich's new documentary. In fact, there's a lot of everything. At just under four hours, ESPN's Lance combines the duration of a Tour de France stage with the depth of the peloton: an endurance event compared to the bite-sized hagiography of Michael Jordan's The Last Dance.
Even for those who follow Armstrong's story like a mini-sport in itself, Lance reveals new sides to this man for all seasons. For me, this was captured not only in his clumsy approximations of a father figure but also in him being asked something I had not read in countless tell-all books: did his earlier experiments in drug-taking contribute to his cancer?
But even in 2020 there are questions that remain unanswered. By needlessly returning to the sport in 2009, did Lance subconsciously want to get caught? Why does he not admit he confessed to Betsy Andreu back in 1999 but will happily apologise to her today for slurring her publicly on this very point? And why does he remain so vindictive towards former-teammate Floyd Landis? In all of Armstrong's evasions and masterful control of the narrative, there is the gnawing feeling that we don't even know what questions we should be even asking. As ever, the questions are more interesting than the answers.
Lance also reminded me of professional cycling's obsession with national identity. Although I was intuitively aware of it to some degree, I had not fully grasped how much this kind of stereotyping runs through the veins of the sport itself, just like the drugs themselves. Journalist Daniel Friebe first offers us the portrait of:
Spaniards tend to be modest, very humble. Very unpretentious. And the Italians are loud, vain and outrageous showmen.
Former directeur sportif Johan Bruyneel then asserts that "Belgians are hard workers... they are ambitious to a certain point, but not overly ambitious", and cyclist Jörg Jaksche concludes with:
The Germans are very organised and very structured. And then the French, now I have to be very careful because I am German, but the French are slightly superior.
This kind of lazy caricature is nothing new, especially for those brought up on a solid diet of Tintin and Asterix, but although all these examples are seemingly harmless, why does the underlying idea of ascribing moral, social or political significance to genetic lineage remain so durable in today's age of anti-racism? To be sure, culture is not quite the same thing as race, but being judged by the character of one's ancestors rather than the actions of an individual is, at its core, one of the many conflations at the heart of racism. There is certainly a large amount of cognitive dissonance at work, especially when Friebe elaborates:
East German athletes were like incredible robotic figures, fallen off a production line somewhere behind the Iron Curtain...
... but then Übermensch Jan Ullrich is immediately described as "emotional" and as having "struggled to live the life of a professional cyclist 365 days a year". We see the habit to stereotype is so ingrained that even in the face of this obvious contradiction, Friebe unironically excuses Ullrich's failure to live up to his German roots by him actually being "Mediterranean".
I mention all this as I am known within my circles for remarking on these national characters, even collecting stereotypical examples of Italians 'being Italian' and the French 'being French' at times. Contrary to evidence, I don't believe in this kind of innate quality but what I do suspect is that people generally behave how they think they ought to behave, perhaps out of sheer imitation or the simple pleasure of conformity. As the novelist Will Self put it:
It's quite a complicated collective imposture, people pretending to be British and people pretending to be French, and then they get really angry with each other over what they're pretending to be.
The really remarkable thing about this tendency is that even if we consciously notice it there is seemingly no escape: even I could not help smirking when I considered that a brash Texan winning the Tour de France actually combines two of America's cherished obsessions: winning... and annoying the French.
I've been digging into async exceptions in Haskell, and getting more
and more concerned. In particular, bracket seems to be often used in ways
that are not async exception safe. I've found multiple libraries with problems.
Here's an example:
withTempFile a = bracket setup cleanup a
  where
    setup = openTempFile "/tmp" "tmpfile"
    cleanup (name, h) = do
        hClose h
        removeFile name
This looks reasonably good: it makes sure to clean up after itself even
when the action throws an exception.
But in fact, that code can leave stale temp files lying around.
If the thread receives an async exception while hClose is
running, it will be interrupted before the file is removed.
We normally think of bracket as masking exceptions, but it
doesn't prevent async exceptions in all cases.
See Control.Exception on "interruptible operations",
which can receive async exceptions even when other exceptions are masked.
It's a bit surprising, but hClose is such an interruptible operation,
because it flushes the write buffer. The only way to know is to
read the code.
It can be quite hard to determine if an operation is interruptible, since
it can come down to whether it retries an STM transaction, or uses an MVar
that is not always full. I've been auditing libraries and I often have
to look at code several dependencies away, and even then may not be sure
if a library has this problem.
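To make the interruptible-under-mask behaviour concrete, here is a small self-contained sketch (not from any of the audited libraries, only base is used): a thread blocks on takeMVar inside mask_, and an async exception still gets through.

```haskell
import Control.Concurrent
import Control.Exception

main :: IO ()
main = do
    blocked <- newEmptyMVar :: IO (MVar ())
    result  <- newEmptyMVar
    tid <- forkIO $ mask_ $
        -- takeMVar on an empty MVar is interruptible: even under
        -- mask_, the async exception thrown below is delivered
        -- while we are blocked here.
        (takeMVar blocked >> putMVar result "finished")
            `catch` \e -> putMVar result
                ("interrupted: " ++ show (e :: SomeException))
    threadDelay 100000  -- give the thread time to block on takeMVar
    throwTo tid (ErrorCall "async exception")
    takeMVar result >>= putStrLn
```

This prints "interrupted: async exception". If the blocking operation were uninterruptible instead, throwTo itself would block, since the exception could never be delivered inside the masked region.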
process's withCreateProcess
could fail to wait on the process, leaving a zombie. Might also leak
file descriptors?
http-client's withResponse
might fail to close a network connection. (If a MVar happened to be empty
when it's called.)
Worth noting that there are plenty of examples of using http-client
to, e.g., race downloading two urls and cancel the slower download, which
is just the kind of use of an async exception that could cause a problem.
persistent's withSqlPool and withSqlConn might fail to clean up,
when used with persistent-postgresql. (If another thread is using the
connection and so a MVar over in postgresql-simple is empty.)
concurrent-output has some locking code that is not async exception safe.
(My library, so I've fixed part of it, and hope to fix the rest.)
So far, around half of the libraries I've looked at that use bracket
or onException or the like probably have this problem.
What can libraries do?
Document whether these things are async exception safe. Or perhaps
there should be an expectation that "withFoo" always is, but if so the
Haskell community has some work ahead of it.
Use finally. Good mostly in simple situations; more complicated things
would be hard to write this way.
hClose h `finally` removeFile name
Use uninterruptibleMask, but it's a big hammer and is often not the
right tool for the job. If the operation takes a while to run,
the program will not respond to ctrl-c during that time.
May be better to run the actions in worker threads, to insulate
them from receiving any async exceptions.
bracketInsulated :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
bracketInsulated a b = bracket
(uninterruptibleMask $ \u -> async (u a) >>= u . wait)
(\v -> uninterruptibleMask $ \u -> async (u (b v)) >>= u . wait)
(Note the use of uninterruptibleMask here in case async itself does an
interruptible operation. My first version got that wrong. This is hard!)
My impression of the state of things now is that you should be very cautious
using race or cancel or withAsync or the like, unless the thread is small
and easy to audit for these problems. Kind of a shame, since I had wanted to
be able to cancel a thread that is big and sprawling and uses all the
libraries mentioned above.
This work was sponsored by Jake Vosloo and Graham Spencer
on Patreon.
Sporting a beautiful 10.1" 1920x1200 display, the Lenovo IdeaPad Duet
Chromebook, or Duet Chromebook, is one of the latest Chromebooks released,
and one of the few slate-style tablets, and it's only about 300 EUR (300 USD).
I've had one for about 2 weeks now, and here are my thoughts.
Build & Accessories
The tablet is a fairly Pixel-style affair, in that the back has two components,
a softer blue one housing the camera and a metal-feeling gray one. Build quality
is fairly good.
The volume and power buttons are located on the right side of the tablet, and
this is one of the main issues: You end up accidentally pressing the power button
when you want to turn your volume lower, despite the power button having a different
texture.
Alongside the tablet, you also find a kickstand with a textile back, and a
keyboard, both of which attach via magnets (and pogo pins for the keyboard).
The keyboard is cramped, with the punctuation keys halved in size, and it
feels mushy compared to my usual experience of ThinkPads and Model Ms, but
it's on par with other Chromebooks, which is surprising, given it's a tablet
attachment.
I mostly use the Duet as a tablet, and only attach the keyboard occasionally.
Typing with the keyboard on your lap is suboptimal.
My first Duet had a few clusters of dead pixels, so I returned it, as I had
ordered a second one as well that I could not cancel. Oh dear. That one was fine!
Hardware & Connectivity
The Chromebook Duet is powered by a Mediatek Helio P60T SoC, 4GB of RAM,
and a choice of 64 or 128 GB of main storage.
The tablet provides one USB-C port for charging, audio output (a 3.5mm adapter
is provided in the box), USB hub functionality, and video output; though, sadly,
the latter is restricted to a maximum of 1080p30, or 1440x900 at 60 Hz. It can
be charged using the included 10W charger, or with up to (I believe) 18W from a
higher-powered USB-C PD charger. I've successfully used the Chromebook with a
USB-C monitor with attached keyboard, mouse, and DAC without any issues.
On the wireless side, the tablet provides 2x2 WiFi AC and Bluetooth 4.2. WiFi
reception seemed just fine, though I have not done any speed testing, lacking
a sensible connection at the moment. I used Bluetooth to connect to my smartphone
for instant tethering, and to my Sony WH1000XM2 headphones, both of which worked
without any issues.
The screen is a bright, 400-nit display with excellent viewing angles,
and the speakers do a decent job, meaning you can easily use this
for watching a movie when you're alone in a room and idling around. It has
a resolution of 1920x1200.
The device supports styluses following the USI standard. As of right now, the
only such stylus I know about is an HP one, and it costs about 70 EUR or so.
Cameras are provided on the front and the rear, but produce terrible images.
Software: The tablet experience
The Chromebook Duet runs Chrome OS, and comes with access to Android apps
via the Play Store (and sideloading in dev mode) and access to full Linux
environments powered by LXD inside VMs.
The 1920x1200 screen is scaled to a ridiculous 1080x675 by default,
which is good for being able to tap buttons and stuff, but shows next to no
content. Scaling it to 1350x844 makes things more balanced.
The Linux integration is buggy. Touches register in different places than where
they happened, and the screen is cut off in full screen extremetuxracer, making
it hard to recommend for such uses.
Android apps generally work fine. There are some issues with the back gesture
not registering, but otherwise I have not found issues I can remember.
One major drawback as a portable media consumption device is that Android apps
only work with Widevine level 3, and hence do not have access to HD content, and
the web apps of Netflix and co do not support downloading. Though one of the Duets
actually said L1 in check apps at some point (reported in issue 1090330).
It's also worth noting that Amazon Prime Video only renders in SD, unless you
change your user agent to say you are Chrome on Windows - bad Amazon!
The tablet experience also lags in some other ways, as the palm rejection is
overly aggressive, causing it to reject valid touches close to the edge of the
display (reported in issue 1090326).
The on-screen keyboard is terrible. It only does one language at a time, forcing
me to switch between German and English all the time, and it does not behave as
you'd expect when editing existing words - it does not know about them and thinks
you are starting a new one. It does provide a small keyboard that you can
move around, as well as a draw-your-letters keyboard, which could come in
handy for stylus users, I guess. In any case, it's miles away from Gboard on
Android.
Stability is a mixed bag right now. As of Chrome OS 83, sites (well, only Disney+
so far) sometimes get killed with SIGILL or SIGTRAP, and the device has rebooted
on its own once or twice. Android apps that use DRM sometimes do not start,
and the Netflix Android app sometimes reports it cannot connect to the servers.
Performance
Performance is decent to sluggish, with micro-stuttering in a lot of places. The
Mediatek CPU is comparable to Intel Atoms, and with only 4GB of RAM and an entire
Android container running, it's starting to show how weak it is.
I found that Google Docs worked perfectly fine, as did websites such as
Mastodon, Twitter, and Facebook. Where the device really struggled was Reddit,
where closing or opening a post, or getting a reply box, could take 5 seconds
or more. If you are looking for a Reddit browsing device, this is not for you.
Performance in Netflix was fine, and Disney+ was fairly slow but still usable.
All in all, it's acceptable, and given the price point and the build quality,
probably the compromise you'd expect.
Summary
tl;dr:
good: Build quality, bright screen, low price, included accessories
The Chromebook Duet or IdeaPad Duet Chromebook is a decent tablet that is built
well above its price point. Its lackluster performance and DRM woes make it
hard to give a general recommendation, though. It's not a good laptop.
I can see this as the perfect note taking device for students, and as a
cheap tablet for couch surfing, or as your on-the-go laptop replacement,
if you need it only occasionally.
I cannot see anyone using this as their main laptop, although I guess some
people only have phones these days, so: what do I know?
I can see you getting this device if you want to tinker with Linux on ARM, as
Chromebooks are quite nice to tinker with, and a tablet is super nice.
Welcome to the May 2020 report from the Reproducible Builds project.
One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. Nonetheless, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into seemingly secure software during the various compilation and distribution processes.
In these reports we outline the most important things that we and the rest of the community have been up to over the past month.
Recent years saw a number of supply chain attacks that leverage the increasing use of open source during software development, which is facilitated by dependency managers that automatically resolve, download and install hundreds of open source packages throughout the software life cycle.
In related news, the LineageOS Android distribution announced that a hacker had access to the infrastructure of their servers after exploiting an unpatched vulnerability.
Marcin Jachymiak of the Sia decentralised cloud storage platform posted on their blog that their siac and siad utilities can now be built reproducibly:
This means that anyone can recreate the same binaries produced from our official release process. Now anyone can verify that the release binaries were created using the source code we say they were created from. No single person or computer needs to be trusted when producing the binaries now, which greatly reduces the attack surface for Sia users.
Synchronicity is a distributed build system for Rust build artifacts which have been published to crates.io. The goal of Synchronicity is to provide a distributed binary transparency system which is independent of any central operator.
The Comparison of Linux distributions article on Wikipedia now features a Reproducible Builds column indicating whether distributions approach and progress towards achieving reproducible builds.
Distribution work
In Debian this month:
Paul Wise continued a discussion that was started in February regarding the storing and distribution of build logs and other related artifacts and their relationship to reproducible builds. For example, the binutils package ships its own, unreproducible, log files in its binary packages. It was followed up by replies from Chris Lamb and Matthias Klose.
diffoscope
Chris Lamb made the changes listed below to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. He also prepared and uploaded versions 142, 143, 144, 145 and 146 to Debian, PyPI, etc.
Comparison improvements:
Improve fuzzy matching of JSON files, as file(1) now supports recognising JSON data. (#106)
Refactor .changes and .buildinfo handling to show all details (including the GnuPG header and footer components) even when referenced files are not present. (#122)
Use our BuildinfoFile comparator (etc.) regardless of whether the associated files (such as the orig.tar.gz and the .deb) are present. []
Include GnuPG signature data when comparing .buildinfo, .changes, etc. []
Add support for printing Android APK signatures via apksigner(1). (#121)
Identify iOS App Zip archive data as .zip files. (#116)
Don't print a traceback if we pass a single, missing argument to diffoscope (e.g. a JSON diff to re-load). []
Correct a "differences" typo in the ApkFile handler. (#127)
Output improvements:
Never emit the same id="foo" anchor reference twice in the HTML output, otherwise identically-named parts cannot be linked to via a #foo anchor. (#120)
Never emit an empty id anchor either; it is not possible to link to #. []
Don't pretty-print the output when using the --json presenter; it will usually be too complicated to be readable by a human anyway. []
Use the SHA256 hash rather than MD5 when generating page names for the HTML directory-style presenter. (#124)
Reporting improvements:
Clarify the message when we truncate the number of lines printed to standard error [] and reduce the maximum number of lines printed to 25, as the error is usually obvious by then [].
Print the amount of free space that we have available in our temporary directory as a debugging message. []
Clarify "Command […] failed with exit code" messages to remove the duplicated "exited with exit" wording, but also to note that diffoscope is interpreting this as an error. []
Don't leak the full path of the temporary directory in "Command […] exited with 1" messages. (#126)
Clarify the warning message when we cannot import the debian Python module. []
Don't repeat stderr output if both commands emit the same output. []
Clarify when an external command emits the same output for both files, otherwise it can look like we are repeating ourselves when, in reality, the command is being run twice. []
Testsuite improvements:
Prevent apksigner test failures due to lack of binfmt_misc, e.g. on Salsa CI and elsewhere. []
Install/remove the build-essential package during the build so we can install the recommended packages from Git. []
Codebase improvements:
Bump the officially required version of Python from 3.5 to 3.6. (#117)
Drop the (default) shell=False keyword argument to subprocess.Popen so that the potentially-unsafe shell=True is more obvious. []
Perform string normalisation in Black [] and include the Black output in the assertion failure too [].
Inline MissingFile's special handling of deb822 to prevent leaking through abstract layers. [][]
Allow a bare try/except block when cleaning up temporary files with respect to the flake8 quality assurance tool. []
Rename in_dsc_path to dsc_in_same_dir to clarify the use of this variable. []
Abstract out the duplicated parts of the debian_fallback class [] and add descriptions for the file types. []
Various commenting and internal documentation improvements. [][]
Rename the Openssl command class to OpenSSLPKCS7 to accommodate other command names with this prefix. []
Misc:
Rename the --debugger command-line argument to --pdb. []
Normalise filesystem stat(2) birth times (i.e. st_birthtime) in the same way we do with the stat(1) command's Access: and Change: times to fix a nondeterministic build failure in GNU Guix. (#74)
Ignore case when ordering our file format descriptions. []
Drop, add and tidy various module imports. [][][][]
In addition:
Jean-Romain Garnier fixed a general issue where, for example, LibarchiveMember's has_same_content method was called regardless of the underlying type of file. []
Daniel Fullmer fixed an issue where some filesystems could only be mounted read-only. (!49)
Emanuel Bronshtein provided a patch to prevent a build of the Docker image containing parts of the build's. (#123)
Mattia Rizzolo added an entry to debian/py3dist-overrides to ensure the rpm-python module is used in package dependencies (#89) and moved to using the new execute_after_* and execute_before_* Debhelper rules [].
Use Jekyll's absolute_url and relative_url where possible [][] and move a number of configuration variables to _config.yml [][].
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
mescc-tools: Inherit CFLAGS in a Makefile, allowing -ffile-prefix-map/-fdebug-prefix-map to sanitise build paths (merged upstream).
Other tools
Elsewhere in our tooling:
strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. In May, Chris Lamb uploaded version 1.8.1-1 to Debian unstable and Bernhard M. Wiedemann fixed an off-by-one error when parsing PNG image modification times. (#16)
In disorderfs, our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues, Chris Lamb replaced the term "dirents" with "directory entries" in human-readable output/log messages [] and ran the astyle source code formatter with its default settings over the main disorderfs.cpp source file [].
Holger Levsen bumped the debhelper-compat level to 13 in disorderfs [] and reprotest [], and for the GNU Guix distribution Vagrant Cascadian updated the versions of disorderfs to version 0.5.10 [] and diffoscope to version 145 [].
Project documentation & website
Carl Dong:
Clarify some potential confusion around GCC libtool. []
Sort our Academic publications page by publication year [] and add Trusting Trust and Fully Countering Trusting Trust through Diverse Double-Compiling [].
Testing framework
We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org that, amongst many other tasks, tracks the status of our reproducibility efforts as well as identifies any regressions that have been introduced. Holger Levsen made the following changes:
System health status:
Improve page description. []
Add more weight to proxy failures. []
More verbose debug/failure messages. [][][]
Work around strangeness in the Bash shell, where let VARIABLE=0 exits with an error. []
Fail loudly if there are more than three .buildinfo files with the same name. []
Fix a typo which prevented the /usr merge variation on Debian unstable. []
Temporarily ignore PHP's [horde](https://www.horde.org/) packages in Debian bullseye. []
Document how to reboot all nodes in parallel, working around molly-guard. []
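The let quirk worked around above is easy to reproduce (a minimal sketch of the bash behavior and the usual fix):

```shell
# Under bash, "let VARIABLE=0" exits with status 1 because the arithmetic
# expression evaluates to zero -- fatal for scripts running under "set -e".
status=$(bash -c 'let n=0; echo $?')
echo "let n=0 exited with status $status"

# The common workaround is to append "|| true" (plain assignment also works):
bash -c 'set -e; let n=0 || true; echo "n=$n"'
```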
Further work on a Debian package rebuilder:
Workaround and document various issues in the debrebuild script. [][][][]
Improve output in the case of errors. [][][][]
Improve documentation and future goals [][][][], in particular documenting two real-world test cases for an impossible-to-recreate build environment [].
Find the right source package to rebuild. []
Increase the frequency we run the script. [][][][]
Improve downloading and selection of the sources to build. [][][]
Improve version string handling. []
Handle build failures better. [][][]
Also consider architecture "all" .buildinfo files. [][]
In addition:
kpcyrd, for Alpine Linux, updated the alpine_schroot.sh script now that a patch for abuild had been released upstream. []
Alexander Couzens of the OpenWrt project renamed the brcm47xx target to bcm47xx. []
Mattia Rizzolo fixed the printing of the build environment during the second build [][][] and made a number of improvements to the script that deploys Jenkins across our infrastructure [][][].
Lastly, Vagrant Cascadian clarified in the documentation that you need to be user jenkins to run the blacklist command [] and the usual build node maintenance was performed by Holger Levsen [][][], Mattia Rizzolo [][] and Vagrant Cascadian [][][].
To make the results accessible and storable, and to create tools around them, they should all follow the same schema: a reproducible builds verification format. The format tries to be as generic as possible to cover all open source projects offering precompiled binaries. It stores the rebuilder results of what is reproducible and what is not.
Do you own your Bitcoins, or do you trust that your app allows you to use your coins while they are actually controlled by them? Do you have a backup? Do they have a copy they didn't tell you about? Did anybody check the wallet for deliberate backdoors or vulnerabilities? Could anybody check the wallet for those?
Elsewhere, Leo had posted instructions on his attempts to reproduce the binaries for the BlueWallet Bitcoin wallet for iOS and Android platforms.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:
This month's report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.
Ubuntu Focal Fossa 20.04 was released two days ago, so I took the opportunity yesterday
and this morning to upgrade my VPS from Ubuntu 18.04 to 20.04. The VPS provides:
SMTP via Postfix
Spam filtering via rspamd
HTTP(S) via nginx and letsencrypt (certbot)
Weechat relay
OpenVPN server
Shadowsocks proxy
Unbound recursive DNS resolver, for the spam filtering
I rebooted one more time than necessary, though, as my cloud provider
Hetzner
recently started offering 2nd generation EPYC instances which I upgraded to
from my Skylake Xeon based instance. I switched from the CX21 for 5.83€/mo
to the CPX11 for 4.15€/mo. This involved a RAM downgrade - from 4GB to 2GB,
but that's fine, the maximum usage I saw was about 1.3 GB when running
dose-distcheck (running hourly); and it's good for everyone that AMD is
giving Intel some good competition, I think.
Anyway, to get back to the distribution upgrade - it was fairly boring. I
started yesterday by taking a copy of the server and launching it locally
in an lxd container, and then tested the upgrade in there, to make sure I'm
prepared for the real thing :)
I got a confusing prompt from postfix as to which site I'm operating
(which is a normal prompt, but I don't know why I see it on an upgrade),
and prompts about a few config files I had changed locally.
As the server is managed by ansible, I just installed the distribution
config files and dropped my changes (setting DPkg::Options { "--force-confnew"; }; in apt.conf),
and then, after the upgrade, ran ansible to redeploy the changes (after checking
what changes it would make and adjusting a few things).
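For the record, the override can live in a drop-in file like the following (the file name is my own convention, and it should be removed again after the upgrade):

```
// /etc/apt/apt.conf.d/90force-confnew (temporary, only for the upgrade)
// Always take the maintainer's version of changed conffiles.
DPkg::Options { "--force-confnew"; };
```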
There are two remaining flaws:
I run rspamd from the upstream repository, and that's not built for focal
yet. So I'm still using the bionic binary, and have to keep bionic's icu 60
and libhyperscan4 around for it.
This is still preventing CI of the ansible config from passing for focal,
because it won't have the needed bionic packages around.
I run weechat from the upstream repository, and apt can't tell the versions
apart. Well, it can for the repositories, because they have Size fields -
but the status file does not. Hence, apt merges the installed version with the
first repository it sees.
What happens is that it installs from weechat.org, but then it believes the installed version
is from archive.ubuntu.com and replaces it on each dist-upgrade.
I worked around it by moving the weechat.org repo to the front of sources.list,
so that the installed version gets merged with that one instead of the
archive.ubuntu.com one, as it should be; but that's a bit ugly.
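Another workaround I have not tried would be an apt pin that keeps the weechat.org origin on top regardless of sources.list order (a sketch only; the origin label is a guess and would need checking against the repository's Release file):

```
# /etc/apt/preferences.d/weechat (hypothetical)
Package: weechat*
Pin: origin weechat.org
Pin-Priority: 1001
```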
I also should start the migration to EC certificates for TLS, and 0-RTT handshakes,
so that the initial visit experience is faster. I guess I'll have to move away
from certbot for that, but I have not investigated this recently.
Yes, that's quite a mouthful! That magic selector is long in that
way because it needs a special syntax (specifically the
.anarcat.user suffix) for Debian to be happy. The -debian
string is to tell me where the key is published. The marcos
prefix is to remind me where the private key is used.
(nsp.dnsnode.net being one of the NS records of the
debian.org zone.)
If all goes well, the tests should pass when sending from your server
as anarcat@debian.org.
Testing
Test messages can be sent to dkimvalidator, mail-tester.com
or check-auth@verifier.port25.com. Those tools will run Spamassassin
on the received emails and report the results. What you are looking
for is:
-0.1 DKIM_VALID: Message has at least one valid DKIM or DK
signature
-0.1 DKIM_VALID_AU: Message has a valid DKIM or DK signature from
author's domain
-0.1 DKIM_VALID_EF: Message has a valid DKIM or DK signature from
envelope-from domain
If one of those is missing, then you are doing something wrong and
your "spamminess" score will be worse. The latter is especially tricky
as it validates the "Envelope From", which is the MAIL FROM: header
as sent by the originating MTA, which you see as from=<> in the
postfix logs.
The following will happen anyway, as soon as you have a signature;
that's normal:
0.1 DKIM_SIGNED: Message has a DKIM or DK signature, not necessarily valid
And this might happen if you have an ADSP record but do not correctly
sign the message with a domain field that matches the record:
1.1 DKIM_ADSP_ALL No valid author signature, domain signs all mail
That's bad and will affect your spam score badly. I fixed that issue by
using a wildcard key in the key table:
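(The example itself did not survive here; with OpenDKIM, a wildcard entry would look roughly like this - the selector and key path are illustrative, following the naming discussed above:)

```
# key.table: one key for the whole domain, using the long selector
marcos-debian._domainkey.debian.org debian.org:marcos-debian.anarcat.user:/etc/dkim/marcos-debian.private

# signing.table (refile): sign mail from any user in the domain with that key
*@debian.org marcos-debian._domainkey.debian.org
```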
I am trying to avoid bringing coronavirus into my house on anything,
and I also don't want to sterilize a lot of stuff. (Tedious and easy to
make a mistake.) Currently it seems that the best approach is to leave
stuff to sit undisturbed someplace safe for long enough for the virus to
degrade away.
Following that policy, I've quickly ended up with a porch full of stuff
in different stages of quarantine, and I am quickly losing track of how
long things have been in quarantine. If you have the same problem,
here is a solution:
https://quarantimer.app/
Open it on your mobile device, and you can take photos of each thing,
select the kind of surfaces it has, and it will track the quarantine time
for you. You can share the link to other devices or other people to
collaborate.
I anticipate the javascript and css will improve, but it's good enough for
now. I will provide this website until the crisis is over. Of course,
it's free software and you can also host your own.
If this seems useful, please tell your friends and family about it.
Be well!
This is made possible by my supporters on
Patreon, particularly Jake Vosloo.
A new version of RcppAPT - our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache commands and their cache powering Debian, Ubuntu and the like - is now on CRAN.
RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.
This new version corrects build failures under the new and shiny APT 2.0 release (and pre-releases like the 1.9.* series in Ubuntu), as some header files moved around. My thanks to Kurt Hornik for the heads-up. I accommodated the change in the (very simple and shell-based) configure script by a) asking pkg-config about the version of apt-pkg and then using that to b) compare to a threshold value of 1.9.0 and c) setting another compiler #define if needed so that d) these headers could get included if defined. The neat part is that a) and b) are done in an R one-liner, and the whole script is still in shell. Now, CRAN being CRAN, I split the script in two: one almost empty one not using bash that passes the "omg but bash is not portable" test, and which calls a second bash script doing the actual work. Fun and games…
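The gist of steps a) through c) can be sketched in plain shell (a sketch only: the pkg-config output is faked here, the comparison uses sort -V instead of the R one-liner, and the #define name is made up):

```shell
ver="2.0.0"          # in the real script: $(pkg-config --modversion apt-pkg)
threshold="1.9.0"
extra_define=""
# sort -V orders version strings; if the threshold sorts first, ver >= threshold
if [ "$(printf '%s\n' "$threshold" "$ver" | sort -V | head -n1)" = "$threshold" ]; then
  extra_define="-DUSE_NEW_APT_HEADERS"   # hypothetical compiler define
fi
echo "$extra_define"
```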
The full set of changes follows.
Changes in version 0.0.6 (2020-03-14)
Accommodate Apt 2.0 code changes by including more header files
Change is backwards compatible and conditional
Added configure call using pkg-config and package version comparison (using R) to determine if the define is needed
Softened unit tests as we cannot assume optional source deb information to be present, so demo code runs but zero results are tolerated
After brewing in experimental for a while, and getting a first outing in
the Ubuntu 19.10 release (both as 1.9), APT 2.0 is now landing in unstable.
1.10 would be a boring, weird number, eh?
Compared to the 1.8 series, APT 2.0 brings several new features,
as well as improvements in performance and hardening. A lot of code has been
removed as well, reducing the size of the library.
Highlighted Changes Since 1.8
New Features
Commands accepting package names now accept aptitude-style patterns. The
syntax of patterns is mostly a subset of aptitude's; see apt-patterns(7) for
more details.
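For example (an illustrative transcript, not taken from the release notes; see apt-patterns(7) for the full pattern list):

```
# list installed packages whose name matches "apt"
$ apt list '?installed ?name(apt)'

# remove automatically installed packages that nothing depends on anymore
$ apt remove '?garbage'
```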
apt(8) now waits for the dpkg locks - indefinitely, when connected
to a tty, or for 120s otherwise.
When apt cannot acquire the lock, it prints the name and pid of the process
that currently holds the lock.
A new satisfy command has been added to apt(8) and apt-get(8).
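It takes a dependency string much like a Depends line in debian/control (an illustrative transcript; package names are just examples):

```
# install whatever is needed to satisfy this relation, alternatives included
$ apt satisfy 'libssl-dev (>= 1.1), git | mercurial'
```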
Pins can now be specified by source package, by prepending src: to the
name of the package, e.g.:
Package: src:apt
Pin: version 2.0.0
Pin-Priority: 990
This will pin all binaries of the native architecture produced by the source
package apt to version 2.0.0. To pin packages across all architectures,
append :any.
Performance
APT now uses libgcrypt for hashing instead of embedded reference
implementations of MD5, SHA1, and SHA2 hash families.
Distribution of rred and decompression work during update has been
improved to take into account the backlog instead of randomly
assigning a worker, which should yield higher parallelization.
Incompatibilities
The apt(8) command no longer accepts regular expressions or wildcards as
package arguments, use patterns (see New Features).
Hardening
Credentials specified in auth.conf now only apply to HTTPS sources,
preventing malicious actors from reading credentials after they redirected
users from an HTTP source to an HTTP URL matching the credentials in
auth.conf. Another protocol can be specified explicitly; see apt_auth.conf(5) for
the syntax.
Developer changes
A more extensible cache format, allowing us to add new fields without
breaking the ABI
All code marked as deprecated in 1.8 has been removed
Implementations of CRC16, MD5, SHA1, SHA2 have been removed
The apt-inst library has been merged into the apt-pkg library.
apt-pkg can now be found by pkg-config
The apt-pkg library now compiles with hidden visibility by default.
Pointers inside the cache are now statically typed. They cannot be
compared against integers (except 0 via nullptr) anymore.
python-apt 2.0
python-apt 2.0 is not yet ready; I'm hoping to add a new, cleaner
API for cache access before making the jump from 1.9 to 2.0 versioning.
libept 1.2
I've moved the maintenance of libept to the APT team. We need to investigate
how to EOL this properly and provide facilities inside APT itself to
replace it. There are no plans to provide new features, only bugfixes
/ rebuilds for new apt versions.
This post has it all. Flotillas of sailboats, peer-to-peer wikis, games,
and de-frogging. But, I need to start by talking about some tech you may not
have heard of yet...
Scuttlebutt is a way for friends to share feeds
of content-addressed messages, peer-to-peer. Most Scuttlebutt clients
currently look something like facebook, but there are also github clones,
chess games, etc. There are many private encrypted conversations going on.
All entirely decentralized.
(My scuttlebutt feed can be viewed here)
Annah is a purely
functional, strongly typed language. Its design allows individual atoms
of the language to be put in content-addressed storage, right down to
data types. So the value True and a hash of the definition of what
True is can both be treated the same by Annah's compiler.
(Not to be confused with my sister, Anna, or part of the Debian Installer
with the same name that I wrote long ago.)
So, how could these be combined together, and what might the result look
like?
Well, I could start by posting a Scuttlebutt message that defines what
True is. And another
Scuttlebutt message defining
False. And then,
another Scuttlebutt message to
define the AND function,
which would link to my messages for True and False. Continue this until
I've built up enough Annah code to write some almost useful programs.
Annah can't do any IO on its own (though it can model IO similarly to how
Haskell does), so for programs to be actually useful, there needs to be
Scuttlebutt client support. The way typing works in Annah, a program's type
can be expressed as a Scuttlebutt link. So a Scuttlebutt client that wants
to run Annah programs of a particular type can pick out programs that link
to that type, and will know what type of data the program consumes and
produces.
Here are a few ideas of what could be built, with fairly simple client-side
support for different types of Annah programs...
Shared dashboards.
Boats in a flotilla are communicating via Scuttlebutt,
and want to share a map of their planned courses. Coders collaborating
via Scuttlebutt want to see an overview of the state of their project.
For this, the Scuttlebutt client needs a way to run a selected Annah
program of type Dashboard, and display its output like a Scuttlebutt
message, in a dashboard window. The dashboard message gets updated
whenever other Scuttlebutt messages come in. The Annah program picks out
the messages it's interested in, and generates the dashboard message.
So, send a message updating your boat's position, and everyone sees it
update on the map. Send a message with updated weather forecasts as
they're received, and everyone can see the storm developing.
Send another message updating a waypoint to avoid the storm,
and steady as you go...
The coders, meanwhile, probably tweak their dashboard's code every day.
As they add git-ssb repos, they make the dashboard display an
overview of their bugs. They get CI systems hooked in and feeding
messages to Scuttlebutt, and make the dashboard go green or red. They
make the dashboard A-B test itself to pick the right shade of red.
And so on...
The dashboard program is stored in Scuttlebutt so everyone is on the same
page, and the most recent version of it posted by a team member gets
used. (Just have the old version of the program notice when there's a
newer version, and run that one..)
(Also could be used in disaster response scenarios, where the data
and visualization tools get built up on the fly in response to local needs,
and are shared peer-to-peer in areas without internet.)
Smart hyperlinks. When a hyperlink in a Scuttlebutt message points to an
Annah program, optionally with some Annah data, clicking on it can
run the program and display the messages that the program generates.
This is the most basic way a Scuttlebutt client could support Annah
programs, and it could be used for tons of stuff. A few examples:
Hiding spoilers.
Click on the link and it'll display a spoiler about a book/movie.
A link to whatever I was talking about one year ago today.
That opens different messages as time goes by. Put it in your Scuttlebutt
profile or something. (Requires a way for Annah to get the current
date, which it normally has no way of accessing.)
Choose your own adventure or twine style games.
Click on the link and the program starts the game, displaying
links to choose between, and so on.
Links to custom views.
For example, a link could lead to a combination of messages from
several different, related channels. Or could filter messages in some
way.
Collaborative filtering. Suppose I don't want to see
frog-related memes in my Scuttlebutt client. I can write an
Annah program that calculates a message's frogginess, and outputs a
Filtered Message. It can leave a message unchanged, or filter it out,
or perhaps minimize its display. I publish the Annah program on my feed,
and tell my Scuttlebutt client to filter all messages through it before
displaying them to me.
I published the program in my Scuttlebutt feed, and so my friends
can use it too. They can build other filtering functions for other
stuff (such as an excess of orange in photos), and integrate my
frog filter into their filter program by simply composing the two.
If I like their filter, I can switch my client to using it. Or not.
Filtering is thus subjective, like Scuttlebutt, and the subjectivity is
expressed by picking the filter you want to use, or developing a
better one.
Wiki pages. Scuttlebutt is built on immutable append-only logs; it
doesn't have editable wiki pages. But they can be built on top using
Annah.
A smart link to a wiki page is a reference to the Annah program
that renders it. Of course being a wiki, there will be more smart
links on the wiki page going to other wiki pages, and so on.
The wiki page includes a smart link to edit it. The editor needs basic
form support in the Scuttlebutt client; when the edited wiki page is
posted, the Annah program diffs it against the previous version and
generates an Edit which gets posted to the user's feed. Rendering the
page is just a matter of finding the Edit messages for it from
people who are allowed to edit it, and combining them.
Anyone can fork a wiki page by posting an Edit to their feed. And can
then post a smart link to their fork of the page.
And anyone can merge other forks into their wiki page (this posts a
control message that makes the Annah program implementing the wiki accept
those forks' Edit messages). Or grant other users permission to edit
the wiki page (another control message). Or grant other users
permissions to grant other users permissions.
There are lots of different ways you might want your wiki to work.
No one wiki implementation, but lots of Annah programs. Others
can interact with your wiki using the program you picked, or fork it and
even switch the program used. Subjectivity again.
User-defined board games. The Scuttlebutt client finds
Scuttlebutt messages containing Annah programs of type Game,
and generates a tab with a list of available games.
The players of a particular game all experience the same game interface,
because the code for it is part of their shared Scuttlebutt message pool,
and the code to use gets agreed on at the start of a game.
To play a game, the Scuttlebutt client runs the Annah program, which
generates a description of the current contents of the game board.
So, for chess, use Annah to define a ChessMove data type,
and the Annah program takes the feeds of the two players, looks
for messages containing a ChessMove, and builds up a description
of the chess board.
As well as the pieces on the game board, the game board description
includes Annah functions that get called when the user moves a
game piece. That generates a new ChessMove which gets recorded
in the user's Scuttlebutt feed.
This could support a wide variety of board games. If you don't mind the
possibility that your opponent might cheat by peeking at the random seed,
even games involving things like random card shuffles and dice rolls
could be built. Also there can be games like Core Wars where the gamers
themselves write Annah programs to run inside the game.
Variants of games can be developed by modifying and reusing game
programs. For example, timed chess is just the chess program
with an added check on move time, and time clock display.
Decentralized chat bots. Chat bots are all the rage (or were a few
months ago, tech fads move fast), but in a decentralized system like
Scuttlebutt, a bot running on a server somewhere would be an ugly point
of centralization. Instead, write an Annah program for the bot.
To launch the bot, publish a message in your own personal Scuttlebutt
feed that contains the bot's program, and a nonce.
The user's Scuttlebutt client takes care of the rest. It looks for messages
with bot programs, and runs the bot's program. This generates or updates
a Scuttlebutt message feed for the bot.
The bot's program signs the messages in its feed using a private key
that's generated by combining the user's public key, and the bot's nonce.
So, the bot has one feed per user it talks to, with deterministic
content, which avoids a problem with forking a Scuttlebutt feed.
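The key point is that the derivation is deterministic: every client that runs the bot derives the same signing key from the same inputs, so they all produce the identical feed. Here is a toy model of that property, assuming nothing about Scuttlebutt's real cryptography; a real implementation would use a proper key-derivation function, and the tiny FNV-1a hash below stands in purely for illustration.

```haskell
-- Toy sketch: derive the bot's per-user signing key from the user's
-- public key plus the bot's nonce. fnv1a and botKey are illustrative
-- stand-ins, not Scuttlebutt's actual key derivation.
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word64)

-- FNV-1a, a tiny non-cryptographic hash, used here only as a
-- placeholder for a real KDF.
fnv1a :: String -> Word64
fnv1a = foldl step 0xcbf29ce484222325
  where step h c = (h `xor` fromIntegral (ord c)) * 0x100000001b3

-- Same inputs always yield the same key, so every client generates the
-- same bot feed and the feed can never fork.
botKey :: String -> String -> Word64
botKey userPubKey nonce = fnv1a (userPubKey ++ "/" ++ nonce)
```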
The bot-generated messages can be stored in the Scuttlebutt database like any
other messages and replicated around. The bot appears as if it were a
Scuttlebutt user. But you can have conversations with it while you're
offline.
(The careful reader may have noticed that deeply private messages sent to
the bot can be decrypted by anyone! This bot thing is probably a bad idea
really, but maybe the bot fad is over anyway. We can only hope. It's
important that there be at least one bad idea in this list.)
This kind of extensibility in a peer-to-peer system is exciting! With these
new systems, we can consider lessons from the world wide web and replicate
some of the good parts, while avoiding the bad. Javascript has been both
good and bad for the web. The extensibility is great, and yet it's a
neverending security and privacy nightmare, and it ties web pages ever more
tightly to programs hidden away on servers. I believe that Annah combined
with Scuttlebutt will comprehensively avoid those problems. Shall we build it?
This exploration was sponsored by Jake Vosloo on
Patreon.
Debian
devscripts
Before deciding to take an indefinite hiatus from devscripts, I prepared one more upload merging various contributed patches and a bit of last minute cleanup.
build-rdeps
Updated build-rdeps to work with compressed apt indices. (Debian bug #698240)
Added support for the Build-Conflicts-Arch and Build-Depends-Arch fields to build-rdeps. (adc87981)
Merged Andreas Henriksson's patch for setting remote.<name>.push-url when using debcheckout to clone a git repository. (Debian bug #753838)
debsign
Updated bash completion for gpg keys to use gpg --with-colons, instead of manually parsing gpg -K output.
Aside from being the Right Way to get machine parseable information out of gpg, it fixed completion when gpg is a 2.x version. (Debian bug #837380)
I also set up integration with Travis CI to hopefully catch issues sooner than "while preparing an upload", as was typically the case before.
Anyone with push access to the Debian/devscripts GitHub repo can take advantage of this to test out changes, or keep the development branches up to date.
In the process, I was able to make some improvements to travis.debian.net, namely support for DEB_BUILD_PROFILES and using a separate, minimal docker image for running autopkgtests.
unibilium
Oddly, the mips64el builds were in BD-Uninstallable state, even though luajit's buildd status showed it was built.
Looking further, I noticed the libluajit-5.1 and libluajit-5.1-dev binary packages didn't have the mips64el architecture enabled, so I asked for it to be enabled.
msgpack-c
There were a few packages left which would FTBFS if I uploaded msgpack-c 2.x to unstable.
All of the bug reports had either trivial workarounds (i.e., forcing use of the v1 C++ API) or trivial patches.
However, I didn't want to continue waiting for the packages to get fixed since I knew other people had expressed interest in the new msgpack-c.
Trying to avoid making other packages insta-buggy, I NMUed autobahn-cpp with the v1 workaround.
That didn't go over well, partly because I didn't send a finalized "Hey, I'd like to get this done and here's my plan to NMU" email.
Based on that feedback, I decided to bump the remaining bugs to "serious" instead of NMUing and upload msgpack-c.
Thanks to Jonas Smedegaard for quickly integrating my proposed fix for libdata-messagepack-perl.
Hopefully, upstream has some time to review the PR soon.
vim
Used the powerpc porterbox to debug and fix a 32-bit integer overflow that was causing test failures.
Asked the vim-perl folks about getting updated runtime files to Bram, after Jakub Wilk filed Debian bug #873755.
This had been fixed 4+ years earlier, but not yet merged back into Vim.
Thanks to Rob Hoelz for pulling things together and sending the updates to Bram.
I've continued to receive feedback from Debian users about their frustration with Vim's new "defaults.vim", both in regards to the actual default settings and its interaction with the system-wide vimrc file.
While I still don't intend to deviate from upstream's behavior, I did push back some more on the existing behavior.
I appreciate Christian Brabandt's effort, as always, to understand the issue at hand and have constructive discussions.
His final suggestion seems like it will resolve the system vimrc interaction, so hopefully Bram is receptive to it.
Thanks to a nudge from Salvatore Bonaccorso and Moritz Mühlenhoff, I uploaded 2:8.0.0197-4+deb9u1 which fixes CVE-2017-11109.
I had intended to do this much sooner, but it fell through the cracks.
Due to Adam Barratt's quick responses, this should make it into the upcoming Stretch 9.2 release.
subversion
Started work on updating the packaging
Converted to 3.0 (quilt) source format
Updated to debhelper 10 compat
Initial attempts at converting to a dh rules file
Running into various problems here and still trying to figure out whether they're in the upstream build system, Debian's patches, or both.
neovim
Worked with Niko Dittmann to fix build failures he was experiencing on OpenBSD 6.1. (#7298)
Merged upstream Vim patches into neovim from various contributors
Discussed focus detection behavior after a recent change in the implementation (#7221)
While testing focus detection in various terminal emulators, I noticed pangoterm didn't support this.
I submitted a merge request on libvterm to provide an API for reporting focus changes.
If that's merged, it will be trivial for pangoterm to notify applications when the terminal has focus.
Fixed a bug in our tooling around merging Vim patches, which was causing it to incorrectly drop certain files from the patches. #7328
Here is my monthly update covering what I have been doing in the free software world in September 2017 (previous month):
Submitted a pull request to Quadrapassel (the Gnome version of Tetris) to start a new game when the pause button is pressed outside of a game. This means you would no longer have to use the mouse to start a new game. [...]
Made a large number of improvements to AptFS, my FUSE-based filesystem that provides a view of unpacked Debian source packages as regular folders, including moving away from manual parsing of package lists [...] and numerous code tidying/refactoring changes.
Sent a small patch to django-sitetree, a Django library for menu and breadcrumb navigation elements, to not mask test exit codes from the surrounding shell. [...]
Updated travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub, to use the Travis CI continuous integration platform to test builds:
Merged a pull request from James McCoy to pass DEB_BUILD_PROFILES through to the build. [...]
Workaround Travis CI's HTTP proxy which does not appear to support SRV records. [...]
Run debc from devscripts if the build was successful [...] and output the .buildinfo file if it exists [...].
Fixed a few issues in local-debian-mirror, my package to easily maintain and customise a local Debian mirror via the DebConf configuration tool:
Fix an issue where file permissions from the remote could result in a local archive that was impossible to access. [...]
Clear out empty directories on the local repository. [...]
Updated django-staticfiles-dotd, my Django staticfiles adaptor to concatenate static media in .d-style directories, to support Python 3.x by using bytes objects (commit) and move away from monkeypatch as it does not have a Python 3.x port yet (commit).
Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.
The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.
This month I:
Published a short blog post about how to determine which packages on your system are reproducible. [...]
Submitted a pull request for Numpy to make the generated config.py files reproducible. [...]
Provided a patch to GTK upstream to ensure the immodules.cache files are reproducible. [...]
Within Debian:
Updated isdebianreproducibleyet.com, moving it to HTTPS, adding cachebusting as well as keeping the number up-to-date.
Submitted the following patches to fix reproducibility-related toolchain issues:
gdk-pixbuf: Make the output of gdk-pixbuf-query-loaders reproducible. (#875704)
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.
Filed an issue attempting to identify the causes behind an increased number of timeouts visible in our CI infrastructure, including running a number of benchmarks of recent versions. (#875324)
New features:
Add "binwalking" support to analyse concatenated CPIO archives such as initramfs images. (#820631).
Print a message if we are reading data from standard input. [...]
Bug fixes:
Loosen matching of file(1)'s output to ensure we correctly also match TTF files under file version 5.32. [...]
Correct references to path_apparent_size in comparators.utils.file and self.buf in diffoscope.diff. [...] [...]
Testing:
Make failing some critical flake8 tests result in a failed build. [...]
Numerous PEP8, flake8, whitespace and other cosmetic tidy-ups.
strip-nondeterminism
strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.
Log which handler processed a file. (#876140). [...]
disorderfs
disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.
Lintian
I made a large number of changes to Lintian, the static analysis tool for Debian packages. It reports on various errors, omissions and general quality-assurance issues to maintainers:
Add 4.1.1 as a supported Standards-Version. (#875509)
Warn about Django libraries that do not depend on Django itself. (#877292)
Add a --list-tags option to print all tags Lintian knows about. (#779675)
Prevent false positives in copyright-year-in-future when matching URLs, the Tcl license (#876360) and "meta" statements such as "Original Author" (#873323).
Update the description of unknown-testsuite to reflect that autopkgtest is not the only valid value. (#876003)
Apply patch from Guillem Jover to add more package section mappings. (#874121)
Update the data/fields/perl-provides and data/fields/virtual-packages files from the archive; the latter fixes a false positive in bacula-director. (#835120)
Apply a patch from Jakub Wilk to prevent test failures on armhf/arm64, etc. (#877147)
Apply a patch from Gianfranco Costamagna to fix a failing LFS test on 32-bit architectures. (#876343)
Correct a Depends on python2.7 that should be python3 in the Python 3 test package.
Update private/generate-tag-summary to ensure that git-describe(1) will always emit 7 hexadecimal digits as the abbreviated object name, use deb.debian.org as the default mirror, and update the remote locations of Contents-<arch> files.
Issued DLA 1084-1 and DLA 1085-1 for libidn and libidn2-0 to fix integer overflow vulnerabilities in Punycode handling.
Issued DLA 1091-1 for unrar-free to prevent a directory traversal vulnerability from a specially-crafted .rar archive. This update introduces a regression test.
Issued DLA 1092-1 for libarchive to prevent malicious .xar archives causing a denial of service via a heap-based buffer over-read.
redis (4.0.2-2) Update the 0004-redis-check-rdb autopkgtest to ensure that the redis.rdb file exists before testing against it.
redis (4.0.2-2~bpo9+1) Upload to stretch-backports.
aptfs (0.11.0-1) New upstream release, moving away from using /var/lib/apt/lists internals. Thanks to Julian Andres Klode for a helpful bug report. (#874765)
lintian (2.5.53, 2.5.54) New upstream releases. (Documented in more detail above.)
Reviews of unreproducible packages
1 package review was added, 49 were updated and 54 were removed this week,
adding to our knowledge about identified issues.
One issue type was updated:
tests.reproducible-builds.org
Vagrant Cascadian and Holger Levsen:
Re-added an armhf build node that had been disabled due to
performance issues, but which works with Linux 4.14-rc1 now! (#876212)
Holger Levsen:
Use botch from stretch to fix the jenkins job which creates the package sets. (botch is currently uninstallable in sid; since pre-stretch-release times we had used a sid schroot to install and use botch.)
Misc.
This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
I've wanted to start making one of these posts for a few months but have
struggled to find the time. But it seems like a good idea, particularly
since I get more done when I write down what I do, so you all get a rather
belated one. This covers July and August; hopefully the September one
will come closer to the end of September.
Debian
August was DebConf, which included a ton of Policy work thanks to Sean
Whitton's energy and encouragement. During DebConf, we incorporated work
from Hideki Yamane to convert Policy to reStructuredText, which has
already made it far easier to maintain. (Thanks also to David Bremner for
a lot of proofreading of the result.) We also did a massive bug triage
and closed a ton of older bugs on which there had been no forward progress
for many years.
After DebConf, as expected, we flushed out various bugs in the
reStructuredText conversion and build infrastructure. I fixed a variety
of build and packaging issues and started doing some more formatting
cleanup, including moving some footnotes to make the resulting document
more readable.
During July and August, partly at DebConf and partly not, I also merged
wording fixes for seven bugs and proposed wording (not yet finished) for
three more, as well as participated in various Policy discussions.
Policy was nearly all of my Debian work over these two months, but I did
upload a new version of the webauth
package to build with OpenSSL 1.1 and drop transitional packages.
Kerberos
I still haven't decided my long-term strategy with the Kerberos packages I
maintain. My personal use of Kerberos is now fairly marginal, but I still
care a lot about the software and can't convince myself to give it up.
This month, I started dusting off pam-krb5 in preparation for a new release. There's been an open issue
for a while around defer_pwchange support in Heimdal, and I spent
some time on that and tracked it down to an upstream bug in Heimdal as
well as a few issues in pam-krb5. The pam-krb5 issues are now fixed in
Git, but I haven't gotten any response upstream from the Heimdal bug
report. I also dusted off three old Heimdal patches and submitted them as
upstream merge requests and reported some more deficiencies I found in
FAST support. On the pam-krb5 front, I updated the test suite for the
current version of Heimdal (which changed some of the prompting) and
updated the portability support code, but haven't yet pulled the trigger
on a new release.
Other Software
I merged a couple of pull requests in podlators, one to fix various typos
(thanks, Jakub Wilk) and one to change the formatting of man page
references and function names to match the current Linux manual page
standard (thanks, Guillem Jover). I also documented a bad interaction
with line-buffered output in the Term::ANSIColor man page. Neither of
these have seen a new release yet.
This is doing so much in so little space and with so little fuss! It's
completely defining two different versions of a Host. One version is the
Installer, which in turn installs the Target. The code above provides
all the information that propellor needs to convert a copy of
the Installer into the Target, which it can do very efficiently. For
example, it knows that the default user account should be deleted, and a
new user account created based on the user's input of their name.
The germ of this idea comes from a short presentation I made about
propellor in Portland several years ago. I was describing
RevertableProperty, and Joachim Breitner pointed out that to use it, the
user essentially has to keep track of the evolution of their Host in
their head. It would be better for propellor to know what past versions
looked like, so it can know when a RevertableProperty needs to be
reverted.
I didn't see a way to address the objection for years. I was hung up
on the problem that propellor's properties can't be compared for equality,
because functions can't be compared for equality (generally). And on the
problem that it would be hard for propellor to pull old versions of a Host
out of git. But then I ran into the situation where I needed these two
closely related hosts to be defined in a single file, and it all fell
into place.
The basic idea is that propellor first reverts all the revertible
properties for other versions. Then it ensures the property for the current
version.
Another use for it would be if you wanted to be able to roll back changes
to a Host. For example:
foos :: Versioned Int Host
foos ver = host "foo"
    & hostname "foo.example.com"
    & ver (   (== 1) --> Apache.modEnabled "mpm_worker"
          <|> (>= 2) --> Apache.modEnabled "mpm_event"
          )
    & ver ( (>= 3) --> Apt.unattendedUpgrades )

foo :: Host
foo = foos `version` (4 :: Int)
Notice that I've embedded a small DSL for versioning into the propellor
config file syntax. While implementing versioning took all day,
that part was super easy; Haskell config files win again!
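The revert-then-ensure behavior can be modeled in miniature. This sketch makes no claim about propellor's real implementation: Action, the clause representation, and these definitions of -->, <|>, and ver are all invented for illustration; only the ordering (revert non-matching versions first, then ensure the current one) comes from the description above.

```haskell
-- Toy model of the versioning combinators: each clause pairs a version
-- predicate with a named property, and selecting a version reverts the
-- non-matching clauses before ensuring the matching ones.
data Action = Ensure String | Revert String deriving (Show, Eq)

infix 6 -->
infixl 5 <|>

(-->) :: (Int -> Bool) -> String -> [(Int -> Bool, String)]
p --> prop = [(p, prop)]

(<|>) :: [(Int -> Bool, String)] -> [(Int -> Bool, String)] -> [(Int -> Bool, String)]
(<|>) = (++)

-- Revert everything that does not apply to this version, then ensure
-- what does.
ver :: Int -> [(Int -> Bool, String)] -> [Action]
ver v cs = [Revert s | (p, s) <- cs, not (p v)]
        ++ [Ensure s | (p, s) <- cs, p v]
```

For example, ver 2 ((== 1) --> "mpm_worker" <|> (>= 2) --> "mpm_event") reverts "mpm_worker" and ensures "mpm_event".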
API documentation for this feature
PS: Not really FRP, probably. But time-varying in a FRP-like way.
Development of this was sponsored by Jake Vosloo on
Patreon.
I recently finished a paper that presents a novel social computing system called the Wikipedia Adventure. The system was a gamified tutorial for new Wikipedia editors. Working with the tutorial creators, we conducted both a survey of its users and a randomized field experiment testing its effectiveness in encouraging subsequent contributions. We found that although users loved it, it did not affect subsequent participation rates.
A major concern that many online communities face is how to attract and retain new contributors. Despite its success, Wikipedia is no different. In fact, researchers have shown that after experiencing a massive initial surge in activity, the number of active editors on Wikipedia has been in slow decline since 2007.
Research has attributed a large part of this decline to the hostile environment that newcomers experience when they begin contributing. New editors often attempt to make contributions which are subsequently reverted by more experienced editors for not following Wikipedia's increasingly long list of rules and guidelines for effective participation.
This problem has led many researchers and Wikipedians to wonder how to more effectively onboard newcomers to the community. How do you ensure that new editors to Wikipedia quickly gain the knowledge they need in order to make contributions that are in line with community norms?
To this end, Jake Orlowitz and Jonathan Morgan from the Wikimedia Foundation worked with a team of Wikipedians to create a structured, interactive tutorial called The Wikipedia Adventure. The idea behind this system was that new editors would be invited to use it shortly after creating a new account on Wikipedia, and it would provide a step-by-step overview of the basics of editing.
The Wikipedia Adventure was designed to address issues that new editors frequently encountered while learning how to contribute to Wikipedia. It is structured into different missions that guide users through various aspects of participation on Wikipedia, including how to communicate with other editors, how to cite sources, and how to ensure that edits present a neutral point of view. The sequence of the missions gives newbies an overview of what they need to know instead of having to figure everything out themselves. Additionally, the theme and tone of the tutorial sought to engage new users, rather than just redirecting them to the troves of policy pages.
Those who play the tutorial receive automated badges on their user page for every mission they complete. This signals to veteran editors that the user is acting in good faith by attempting to learn the norms of Wikipedia.
Once the system was built, we were interested in knowing whether people enjoyed using it and found it helpful. So we conducted a survey asking editors who played the Wikipedia Adventure a number of questions about its design and educational effectiveness. Overall, we found that users had a very favorable opinion of the system and found it useful.
We were heartened by these results. We'd sought to build an orientation system that was engaging and educational, and our survey responses suggested that we succeeded on that front. This led us to ask: could an intervention like the Wikipedia Adventure help reverse the trend of a declining editor base on Wikipedia? In particular, would exposing new editors to the Wikipedia Adventure lead them to make more contributions to the community?
To find out, we conducted a field experiment on a population of new editors on Wikipedia. We identified 1,967 newly created accounts that passed a basic test of making good-faith edits. We then randomly invited 1,751 of these users via their talk page to play the Wikipedia Adventure. The rest were sent no invitation. Out of those who were invited, 386 completed at least some portion of the tutorial.
We were interested in knowing whether those we invited to play the tutorial (our treatment group) and those we didn't (our control group) contributed differently in the first six months after they created accounts on Wikipedia. Specifically, we wanted to know whether there was a difference in the total number of edits they made to Wikipedia, the number of edits they made to talk pages, and the average quality of their edits as measured by content persistence.
We conducted two kinds of analyses on our dataset. First, we estimated the effect of inviting users to play the Wikipedia Adventure on our three outcomes of interest. Second, we estimated the effect of playing the Wikipedia Adventure, conditional on having been invited to do so, on those same outcomes.
To our surprise, we found that in both cases there were no significant effects on any of the outcomes of interest. Being invited to play the Wikipedia Adventure therefore had no effect on new users volume of participation either on Wikipedia in general, or on talk pages specifically, nor did it have any effect on the average quality of edits made by the users in our study. Despite the very positive feedback that the system received in the survey evaluation stage, it did not produce a significant change in newcomer contribution behavior. We concluded that the system by itself could not reverse the trend of newcomer attrition on Wikipedia.
Why would a system that was received so positively ultimately produce no aggregate effect on newcomer participation? We've identified a few possible reasons. One is that perhaps a tutorial by itself would not be sufficient to counter hostile behavior that newcomers might experience from experienced editors. Indeed, the friendly, welcoming tone of the Wikipedia Adventure might contrast with strongly worded messages that new editors receive from veteran editors or bots. Another explanation might be that users enjoyed playing the Wikipedia Adventure, but did not enjoy editing Wikipedia. After all, the two activities draw on different kinds of motivations. Finally, the system required new users to choose to play the tutorial. Maybe people who chose to play would have gone on to edit in similar ways without the tutorial.
Ultimately, this work shows us the importance of testing systems outside of lab studies. The Wikipedia Adventure was built by community members to address known gaps in the onboarding process, and our survey showed that users responded well to its design.
While it would have been easy to declare victory at that stage, the field deployment study painted a different picture. Systems like the Wikipedia Adventure may inform the design of future orientation systems. That said, more profound changes to the interface or modes of interaction between editors might also be needed to increase contributions from newcomers.
rejected deletion (despite valid reason) of
borgbackup (no other screenshots)
rejected deletion of
synaptic (garbage in reason field)
Administration
Debian:
discuss mail bounces with a hoster,
check perms of LE results,
add 1 user to a group,
re-sent some TLS cert expiry mail,
clean up mail bounce flood,
approve some debian.net TLS certs,
do the samhain dance thrice,
end 1 samhain mail flood,
diagnose/fix LDAP update issue,
relay DebConf cert expiry mails,
reboot 2 non-responsive VMs,
merged patches for debian.org-sources.debian.org meta-package,